
Neural Information Processing Systems

However, for the recommender system experiment, there are no natural representations for the candidate models. Off-policy evaluation (OPE) methods such as IS-g/DR-g can provide an estimate of the accumulative metric; the resulting methods are denoted as IS-EI and DR-EI, respectively. As there is little information to be gained by repeatedly deploying the same model online, we exclude models that have already been deployed when choosing the next model to deploy, for all methods including AOE. We simulate the "online" deployment scenario as follows: a multi-class classifier is given a set of inputs; for each input, the classifier returns a prediction of the label, and only binary immediate feedback indicating whether the predicted class is correct is available. The y-axis shows the gap in the accumulative metric between the optimal model and the best model estimated by each method.
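The simulated deployment described above can be sketched as follows. This is a minimal illustration, not the paper's code: the model, inputs, and labels are all hypothetical, and the accumulative metric is computed as the average of the binary immediate feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

def deploy_online(model, inputs, labels):
    """Simulate one online deployment: the model predicts a class for each
    input, and only binary feedback (1 if correct, 0 otherwise) is observed."""
    predictions = model(inputs)
    feedback = (predictions == labels).astype(float)  # binary immediate feedback
    return feedback

# Hypothetical candidate "model": always predicts the majority class.
inputs = rng.normal(size=(100, 5))
labels = rng.integers(0, 3, size=100)
majority_model = lambda X: np.full(len(X), np.bincount(labels).argmax())

feedback = deploy_online(majority_model, inputs, labels)
accumulative_metric = feedback.mean()  # average immediate feedback
```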

Review for NeurIPS paper: Model Selection for Production System via Automated Online Experiments

Neural Information Processing Systems

Summary and Contributions: The paper proposes a model selection algorithm called Model Selection with Automated Online Experiments (AOE) that is designed for use in production systems. In the problem statement, it is stated that the goal of the model selection problem is to select the model from a set of candidate models that maximises a metric of interest. It is assumed that the metric of interest can be expressed as the average immediate feedback from each of a model's predictions. AOE uses both historical log data and data collected from a small budget of online experiments to inform the choice of model. A distribution for the accumulative metric, or expected immediate feedback, is derived.
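Since the accumulative metric is the expected binary immediate feedback, a distribution over it can be maintained from logged feedback. The sketch below uses a simple Beta-Bernoulli conjugate model as a stand-in; the paper's actual surrogate is a Bayesian model trained on historical logs, and the counts here are illustrative.

```python
import numpy as np

def metric_posterior(successes, trials, a0=1.0, b0=1.0):
    """Beta posterior over the accumulative metric (expected binary feedback)
    under a Beta-Bernoulli model -- a simplified stand-in for the paper's
    Bayesian surrogate."""
    a, b = a0 + successes, b0 + trials - successes
    mean = a / (a + b)
    return a, b, mean

# Illustrative logged data: 70 correct predictions out of 100.
a, b, mean = metric_posterior(successes=70, trials=100)
samples = np.random.default_rng(1).beta(a, b, size=10_000)  # posterior draws
```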


Model Selection for Production System via Automated Online Experiments

Dai, Zhenwen, Chandar, Praveen, Fazelnia, Ghazal, Carterette, Ben, Lalmas-Roelleke, Mounia

arXiv.org Machine Learning

A challenge that machine learning practitioners in the industry face is the task of selecting the best model to deploy in production. As a model is often an intermediate component of a production system, online controlled experiments such as A/B tests yield the most reliable estimation of the effectiveness of the whole system, but can only compare two or a few models due to budget constraints. We propose an automated online experimentation mechanism that can efficiently perform model selection from a large pool of models with a small number of online experiments. We derive the probability distribution of the metric of interest that contains the model uncertainty from our Bayesian surrogate model trained using historical logs. Our method efficiently identifies the best model by sequentially selecting and deploying a list of models from the candidate set that balances exploration and exploitation. Using simulations based on real data, we demonstrate the effectiveness of our method on two different tasks.
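The sequential selection loop can be sketched as below. This is a hedged illustration, not AOE itself: it uses Thompson-style posterior sampling as a simple stand-in for the paper's acquisition, assumes a Gaussian posterior over each candidate's metric, and excludes already-deployed models as described in the experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical surrogate output: posterior mean and std of the accumulative
# metric for 20 candidate models (numbers are purely illustrative).
post_mean = rng.uniform(0.5, 0.9, size=20)
post_std = rng.uniform(0.01, 0.1, size=20)

def select_next(deployed, post_mean, post_std):
    """Choose the next model to deploy by sampling one draw per candidate
    from its posterior and taking the argmax, skipping deployed models."""
    best, choice = -np.inf, None
    for i in range(len(post_mean)):
        if i in deployed:
            continue  # no value in re-deploying the same model
        score = rng.normal(post_mean[i], post_std[i])  # one posterior draw
        if score > best:
            best, choice = score, i
    return choice

deployed = set()
for _ in range(3):  # small online-experiment budget
    deployed.add(select_next(deployed, post_mean, post_std))
```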